
    DNS weighted footprints for web browsing analytics

    The monetization of the large amount of data that ISPs hold about their users is still at an early stage. In particular, knowledge of the websites that specific users, or aggregates of users, visit opens new business opportunities once the data has been suitably sanitized. However, constructing accurate DNS-based web-user profiles on large networks is challenging, not only because of the requirements that traffic capture entails, but also because of DNS caches, the proliferation of botnets and the complexity of current websites (i.e., when a user visits a website, a set of self-triggered DNS queries is issued for banners, from both the same company and third-party services, as well as for preloaded and prefetched content). We therefore propose to count the intentional visits users make to websites by means of DNS weighted footprints. This novel approach considers that a website was actively visited if an empirically estimated fraction of the DNS queries of both the website itself and its set of self-triggered websites is observed. The approach has been implemented in a system named DNSprints. After its parameterization (i.e., balancing the importance of a website in a footprint with respect to the total set of footprints), we have measured that our proposal is able to identify visits and their durations with false and true positive rates between 2 and 9% and over 90%, respectively, at throughputs between 800,000 and 1.4 million DNS packets per second in diverse scenarios, thus proving both its refinement and applicability. The authors would like to acknowledge funding received through the TRAFICA (TEC2015-69417-C2-1-R) grant from the Spanish R&D programme. The authors thank Víctor Uceda for his collaboration in the early stages of this work.
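    The following Python sketch illustrates the weighted-footprint idea described above, under invented assumptions: the footprint contents, weights, threshold and the helper names (FOOTPRINTS, intentional_visit) are illustrative only and are not the parameters or code of DNSprints.

    ```python
    from typing import Dict, Set

    # Hypothetical footprint: domains a visit to example.com typically triggers,
    # each with a weight reflecting how characteristic it is of a real visit.
    FOOTPRINTS: Dict[str, Dict[str, float]] = {
        "example.com": {
            "example.com": 0.5,         # the site itself dominates the footprint
            "cdn.example.com": 0.2,     # first-party static content
            "ads.thirdparty.net": 0.2,  # third-party banner
            "fonts.provider.org": 0.1,  # prefetched resource
        }
    }

    def intentional_visit(observed: Set[str], site: str, threshold: float = 0.6) -> bool:
        """True if the weighted fraction of the site's footprint seen among the
        observed DNS queries reaches the (empirically estimated) threshold."""
        footprint = FOOTPRINTS.get(site, {})
        if not footprint:
            return False
        score = sum(w for domain, w in footprint.items() if domain in observed)
        return score >= threshold

    # Example: DNS queries captured in a short window around a candidate visit.
    window = {"example.com", "cdn.example.com", "fonts.provider.org"}
    print(intentional_visit(window, "example.com"))  # True (score 0.8 >= 0.6)
    ```

    In this toy form, seeing only a third-party domain (e.g. a banner triggered from another page) does not reach the threshold, which is the intuition behind counting only intentional visits.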

    NATRA: Network ACK-Based Traffic Reduction Algorithm

    Traffic monitoring involves capturing and processing packets at a very high rate. Typically, flow records are generated from the packet traffic, such as TCP flow records that include the number of bytes and packets in each direction, flow duration, number of different ports, and other metrics. Delivering such flow records for network traffic flowing at tens of Gbps is rather challenging in terms of processing power. To address this problem, traffic thinning can be applied to reduce the input load by swiftly discarding useless packets at the sniffer NIC or driver level, which effectively reduces the load on the software layers that handle traffic processing. This work proposes an algorithm that drops empty ACK packets from TCP traffic, thus achieving a significant reduction in the packets per second that must be handled by each traffic module. The tests discussed below show that the algorithm achieves a 25% decrease in the packet rate with minimal information loss. This work was supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund through the projects TRAFICA under Grant MINECO/FEDER TEC2015-69417-C2-1-R and Procesado Inteligente de Tráfico under Grant MINECO/FEDER TEC2015-69417-C2-2-R.
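    A minimal sketch of this kind of early filter follows: it discards TCP segments that carry only an acknowledgement (ACK set, no payload, no SYN/FIN/RST) before they reach the flow-record modules. The packet representation and function names are illustrative assumptions; NATRA's actual NIC/driver-level implementation is not reproduced here.

    ```python
    # Standard TCP flag bits.
    FIN, SYN, RST, ACK = 0x01, 0x02, 0x04, 0x10

    def is_empty_ack(tcp_flags: int, payload_len: int) -> bool:
        """True for pure-ACK segments that carry no payload and no connection-state change."""
        return (
            payload_len == 0
            and bool(tcp_flags & ACK)
            and not (tcp_flags & (SYN | FIN | RST))
        )

    def thin(packets):
        """Yield only the packets that the flow-processing layers still need."""
        for pkt in packets:
            if not is_empty_ack(pkt["flags"], pkt["payload_len"]):
                yield pkt

    # Example: a data segment, a pure ACK, and a FIN+ACK.
    trace = [
        {"flags": ACK, "payload_len": 1460},     # kept: carries data
        {"flags": ACK, "payload_len": 0},        # dropped: empty ACK
        {"flags": ACK | FIN, "payload_len": 0},  # kept: closes the connection
    ]
    print(list(thin(trace)))
    ```

    Keeping SYN/FIN/RST segments, even when empty, preserves the information needed to delimit flows and measure their duration, which is why only pure ACKs are candidates for dropping.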

    Workforce capacity planning for proactive troubleshooting in the Network Operations Center

    Modern data centers require a Network Operations Center that continuously monitors network health, ideally so that proactive action can be taken before potential trouble occurs. In this paper, we contribute to the capacity planning of the workforce in charge of this task. To this end, we have extensively analyzed, with real-world data, behavioral changes in a large server population in a data center. Our findings allow such behavioral changes, which may be indicative of potential trouble, to be classified into relevance regions using a ranking mechanism. The proposed methodology then allows, together with an estimate of the time to analyze each change, assessing the workforce needed to proactively tackle the behavioral changes observed. We conclude with a case study from a working data center, including a hands-on implementation of a traffic analysis solution to detect such behavioral changes and an estimate of the workforce needed to analyze them. Our results show that between 4 and 5 network managers are adequate for handling behavioral-change analysis in a large enterprise data center. This research has been partially funded by the Ministry of Science and Innovation of Spain through Project AGILEMON under Grant AEI PID2019-104451RB-C21 and by Naudit High Performance Computing and Networking under an art. 83 project.
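    A back-of-the-envelope sketch of this kind of workforce estimate is shown below. The event rates per relevance region, the per-event analysis times and the effective workday length are invented for illustration; the paper derives such figures from real data-center measurements.

    ```python
    import math

    # Hypothetical daily counts of behavioral changes per relevance region,
    # and the mean time (hours) a network manager needs to analyze one of them.
    events_per_day = {"high": 12, "medium": 40, "low": 150}
    hours_per_event = {"high": 1.0, "medium": 0.25, "low": 0.05}

    WORKDAY_HOURS = 7.0  # effective analysis hours per manager per day (assumption)

    total_hours = sum(events_per_day[r] * hours_per_event[r] for r in events_per_day)
    managers = math.ceil(total_hours / WORKDAY_HOURS)

    print(f"daily analysis load: {total_hours:.1f} h -> {managers} managers")
    ```

    The ranking into relevance regions matters because it lets the high-relevance changes receive a larger time budget while the long tail of low-relevance changes is triaged quickly.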

    Estimation of the parameters of token-buckets in multi-hop environments

    Bandwidth verification in shaping scenarios receives much attention from both operators and clients because of its impact on Quality of Service (QoS). As a result, measuring shapers’ parameters, namely the Committed Information Rate (CIR), Peak Information Rate (PIR) and Maximum Burst Size (MBS), is a relevant issue when it comes to assessing QoS. In this paper, we present a novel algorithm, TBCheck, which serves to accurately measure such parameters with minimal intrusiveness. These measurements are the cornerstone for the validation of Service Level Agreements (SLAs) with multiple shaping elements along an end-to-end path. As a further outcome of this measurement method, we define a formal taxonomy of multi-hop shaping scenarios. A thorough performance evaluation covering this taxonomy shows the advantages of TBCheck compared to other tools in the state of the art, yielding more accurate results even in the presence of cross-traffic. Additionally, our findings show that MBS estimation is unfeasible when the link load is high, regardless of the measurement technique, because the token bucket will always be empty. Consequently, we propose an estimation policy that maximizes accuracy by measuring CIR during busy hours and PIR and MBS during off-peak hours. This work was partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund under the project Tráfica (MINECO/FEDER TEC2015-69417-C2-1-R).
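    To make the parameters concrete, the sketch below models a single token bucket (CIR in bits per second, MBS in bytes) and shows the most naive way to estimate CIR: saturate the shaper and measure the long-term passing rate. The class, probing pattern and numbers are illustrative assumptions, not the TBCheck algorithm.

    ```python
    class TokenBucket:
        """Single-rate token bucket: tokens refill at CIR, bucket depth is MBS."""

        def __init__(self, cir_bps: float, mbs_bytes: float):
            self.rate = cir_bps / 8.0   # token refill rate in bytes/s
            self.capacity = mbs_bytes   # bucket depth (MBS)
            self.tokens = mbs_bytes     # start with a full bucket
            self.t = 0.0

        def conforms(self, size: int, t: float) -> bool:
            """Refill tokens for the elapsed time and test whether the packet passes."""
            self.tokens = min(self.capacity, self.tokens + (t - self.t) * self.rate)
            self.t = t
            if self.tokens >= size:
                self.tokens -= size
                return True
            return False

    # Naive CIR estimate: offer more traffic than the shaper admits and divide
    # the bytes that pass by the probing duration.
    bucket = TokenBucket(cir_bps=10e6, mbs_bytes=50_000)
    pkt, interval = 1500, 0.0001  # 1500 B every 100 us -> 120 Mb/s offered
    passed = sum(pkt for i in range(200_000) if bucket.conforms(pkt, i * interval))
    duration = 200_000 * interval  # 20 s of probing
    print(f"estimated CIR ~ {passed * 8 / duration / 1e6:.1f} Mb/s")
    ```

    The same model also illustrates the observation about MBS: under sustained high load the bucket never refills to its full depth, so the initial burst that would reveal MBS cannot be observed, which is why the abstract recommends measuring MBS during off-peak hours.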

    E2E-OAM in convergent sub-wavelength-MPLS environments

    J. Fernandez-Palacios, J. Aracil, M. Basham, and M. Georgiades, "E2E-OAM in convergent sub-wavelength-MPLS environments", in Future Network and Mobile Summit, 2012, pp. 1-11. This paper presents an End-to-End (E2E) Operations, Administration, and Maintenance (OAM) architecture for Telco networks that include a sub-wavelength domain. It addresses two main issues: compatibility between MPLS networks and different sub-wavelength technologies, and scalability of the OAM flows across the whole network. The case for OPST sub-wavelength technology in the data plane has been studied extensively; however, this is the first study of a methodology to scale the number of OAM flows in an E2E scenario combining both sub-wavelength and MPLS switching domains. Finally, the inter-carrier issue in E2E OAM is also explored.